In the field of multimodal sentiment analysis (MSA), a few studies have leveraged the inherent modality correlation information stored in samples for self-supervised learning. However, they feed the training pairs in a random order without considering difficulty. Without human annotation, the generated training pairs of self-supervised learning often contain noise. If noisy or hard pairs are used for training in the easy stage, the model might get stuck in a bad local optimum. In this paper, we inject curriculum learning into weakly supervised modality correlation learning. The weakly supervised correlation learning leverages the label information to generate scores for negative pairs so as to learn a more discriminative embedding space, where negative pairs are defined as two unimodal embeddings from different samples. To assist the correlation learning, we feed the training pairs to the model according to difficulty via the proposed curriculum learning, which consists of elaborately designed scoring and feeding functions. The scoring function computes the difficulty of pairs using pre-trained and current correlation predictors, where pairs with large losses are defined as hard pairs. Notably, the hardest pairs are discarded in our algorithm, as they are assumed to be noisy pairs. Moreover, the feeding function takes the difference of correlation losses as feedback to determine the feeding actions ('stay', 'step back', or 'step forward'). The proposed method reaches state-of-the-art performance on MSA.
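As a rough illustration of the two components (not the authors' implementation), the sketch below assumes pair difficulty is the averaged correlation loss of a frozen pre-trained predictor and the current predictor, that the highest-loss pairs are discarded as presumed noise, and that the feeding decision is driven by the change in correlation loss; the function names, noise quantile, and tolerance threshold are all assumptions.

```python
import numpy as np

def score_pairs(losses_pretrained, losses_current, noise_quantile=0.9):
    """Rank pairs from easy to hard by averaged correlation loss and drop
    the highest-loss pairs as presumed noise (quantile is illustrative)."""
    difficulty = 0.5 * (np.asarray(losses_pretrained) + np.asarray(losses_current))
    keep = difficulty <= np.quantile(difficulty, noise_quantile)
    kept_indices = np.flatnonzero(keep)
    return kept_indices[np.argsort(difficulty[keep])], difficulty

def feeding_action(prev_loss, curr_loss, tol=1e-2):
    """Map the change in correlation loss to a curriculum action."""
    delta = curr_loss - prev_loss
    if delta > tol:      # loss went up: current pairs look too hard
        return "step back"
    if delta < -tol:     # loss dropped clearly: move on to harder pairs
        return "step forward"
    return "stay"
```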
Recent cross-lingual cross-modal works attempt to extend Vision-Language Pre-training (VLP) models to non-English inputs and achieve impressive performance. However, these models focus only on understanding tasks and adopt encoder-only architectures. In this paper, we propose ERNIE-UniX2, a unified cross-lingual cross-modal pre-training framework for both generation and understanding tasks. ERNIE-UniX2 integrates multiple pre-training paradigms (e.g., contrastive learning and language modeling) based on an encoder-decoder architecture and attempts to learn a better joint representation across languages and modalities. Furthermore, ERNIE-UniX2 can be seamlessly fine-tuned for a variety of generation and understanding downstream tasks. Pre-trained on both multilingual text-only and image-text datasets, ERNIE-UniX2 achieves SOTA results on various cross-lingual cross-modal generation and understanding tasks such as multimodal machine translation and multilingual visual question answering.
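One plausible way to combine the two pre-training paradigms mentioned above is to sum an image-text contrastive loss with a conditional language-modeling loss on the decoder. The sketch below is a minimal PyTorch rendering under that assumption; the temperature and loss weighting are chosen arbitrarily rather than taken from ERNIE-UniX2.

```python
import torch
import torch.nn.functional as F

def joint_pretraining_loss(img_emb, txt_emb, lm_logits, lm_targets,
                           temperature=0.07, lm_weight=1.0):
    """Illustrative mix of an image-text contrastive (InfoNCE-style) loss
    and a language-modeling loss over decoder logits."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature           # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    contrastive = 0.5 * (F.cross_entropy(logits, targets) +
                         F.cross_entropy(logits.t(), targets))

    # lm_logits: (B, T, V) decoder outputs; lm_targets: (B, T) token ids.
    lm = F.cross_entropy(lm_logits.flatten(0, 1), lm_targets.flatten(),
                         ignore_index=-100)
    return contrastive + lm_weight * lm
```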
Temporal grounding aims to locate the target video moment that corresponds to a given sentence query in an untrimmed video. However, recent works find that existing methods suffer from a severe temporal bias problem: instead of locating target moments based on visual-textual semantic alignment, these methods over-rely on the temporal biases of queries in the training set. To this end, this paper proposes a novel training framework for grounding models that uses shuffled videos to address the temporal bias problem without losing grounding accuracy. Our framework introduces two auxiliary tasks, cross-modal matching and temporal order discrimination, to promote the training of the grounding model. The cross-modal matching task exploits the content consistency between shuffled and original videos to force the grounding model to mine visual content in order to match queries semantically. The temporal order discrimination task leverages the difference in temporal order to strengthen the understanding of long-term temporal context. Extensive experiments on Charades-STA and ActivityNet Captions demonstrate the effectiveness of our method in mitigating the reliance on temporal biases and strengthening the model's ability to generalize to different temporal distributions. Code is available at https://github.com/haojc/shufflingvideosfortsg.
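A toy PyTorch sketch of the two auxiliary tasks is given below; `model.match` and `model.order_logit` are hypothetical interfaces (a cross-modal matching score and a per-video order logit), and the consistency and classification losses are illustrative choices rather than the paper's exact objectives.

```python
import torch
import torch.nn.functional as F

def auxiliary_losses(model, video_feats, query_feats):
    """Shuffle clip features, keep the query-matching score consistent
    (content over order), and train a head to tell original from shuffled."""
    perm = torch.randperm(video_feats.size(1))
    shuffled = video_feats[:, perm, :]

    # Cross-modal matching: semantic match should survive shuffling.
    score_orig = model.match(video_feats, query_feats)      # hypothetical API
    score_shuf = model.match(shuffled, query_feats)
    match_loss = F.mse_loss(score_shuf, score_orig.detach())

    # Temporal order discrimination: original (1) vs shuffled (0).
    logits = torch.cat([model.order_logit(video_feats),     # hypothetical API
                        model.order_logit(shuffled)], dim=0)
    labels = torch.cat([torch.ones(video_feats.size(0)),
                        torch.zeros(video_feats.size(0))]).to(logits.device)
    order_loss = F.binary_cross_entropy_with_logits(logits.squeeze(-1), labels)
    return match_loss, order_loss
```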
Pre-trained models (PTMs) have become a fundamental backbone for downstream tasks in natural language processing and computer vision. Despite the initial gains obtained by applying generic PTMs to geo-related tasks at Baidu Maps, performance plateaued over time. One of the main reasons for this plateau is the lack of readily available geographic knowledge in generic PTMs. To address this problem, in this paper we present ERNIE-GeoL, a geography-and-language pre-trained model designed and developed to improve geo-related tasks at Baidu Maps. ERNIE-GeoL is carefully designed to learn a universal representation of geography-language by pre-training on large-scale data generated from a heterogeneous graph that contains abundant geographic knowledge. Extensive quantitative and qualitative experiments conducted on large-scale real-world datasets demonstrate the superiority and effectiveness of ERNIE-GeoL. ERNIE-GeoL has been deployed in production at Baidu Maps since April 2021, which significantly benefits the performance of a variety of downstream tasks. This demonstrates that ERNIE-GeoL can serve as a fundamental backbone for a wide range of geo-related tasks.
Conventional methods for image-text generation tasks mainly tackle the naturally bidirectional generation tasks separately, focusing on designing task-specific frameworks to improve the quality and fidelity of the generated samples. Recently, vision-language pre-trained models have greatly improved the performance of image-to-text generation tasks, but large-scale pre-trained models for text-to-image synthesis remain under-developed. In this paper, we propose ERNIE-ViLG, a unified generative pre-training framework for bidirectional image-text generation with transformer models. Based on an image quantization model, we formulate both image generation and text generation as autoregressive generative tasks conditioned on text/image inputs. The bidirectional image-text generative modeling eases the semantic alignment across vision and language. For the text-to-image generation process, we further propose an end-to-end training method to jointly learn the visual sequence generator and the image reconstruction. To explore the landscape of large-scale pre-training for bidirectional text-image generation, we train a 10-billion-parameter ERNIE-ViLG model on a large-scale dataset of 145 million (Chinese) image-text pairs, which achieves state-of-the-art performance on both text-to-image and image-to-text tasks, obtaining an FID of 7.9 on MS-COCO for text-to-image synthesis and the best results on COCO-CN and AIC-ICC for image captioning.
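Conceptually, once images are mapped to discrete codes by the quantization model, both directions can be cast as next-token prediction over one concatenated sequence. The sketch below illustrates that formatting only, with hypothetical special-token ids rather than ERNIE-ViLG's actual input layout.

```python
def build_sequences(text_ids, image_codes, boi_token=0, bot_token=1):
    """Format both generation directions as autoregressive token sequences.
    `image_codes` are discrete codes from a vector-quantized image tokenizer;
    `boi_token`/`bot_token` (begin-of-image/text) are illustrative ids."""
    text_to_image = text_ids + [boi_token] + image_codes   # predict image codes
    image_to_text = image_codes + [bot_token] + text_ids   # predict text tokens
    return text_to_image, image_to_text
```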
Pre-trained language models have achieved state-of-the-art results on various natural language processing (NLP) tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge-enhanced models, and a model with 10 billion parameters was trained. ERNIE 3.0 outperformed the state-of-the-art models on various NLP tasks. To explore the performance of scaling up, we train a hundred-billion-parameter model called ERNIE 3.0 Titan with up to 260 billion parameters on the PaddlePaddle platform. Furthermore, we design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible and controllable text. To reduce computation overhead and carbon emission, we propose an online distillation framework for ERNIE 3.0 Titan, where the teacher model teaches students and trains itself simultaneously. ERNIE 3.0 Titan is the largest Chinese dense pre-trained model to date. Empirical results show that ERNIE 3.0 Titan outperforms the state-of-the-art models on 68 NLP datasets.
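A minimal sketch of what an online distillation step could look like: the teacher keeps optimizing its own objective while the student matches the teacher's softened outputs in the same iteration. The losses, temperature, and weighting below are assumptions, not ERNIE 3.0 Titan's actual recipe.

```python
import torch
import torch.nn.functional as F

def online_distillation_step(teacher, student, batch, labels, tau=2.0, alpha=0.5):
    """One simultaneous teacher/student step (illustrative hyperparameters)."""
    t_logits = teacher(batch)
    s_logits = student(batch)

    # Teacher continues training on its own supervised objective.
    teacher_loss = F.cross_entropy(t_logits, labels)

    # Student mixes a hard-label loss with a soft-label distillation term.
    soft_t = F.log_softmax(t_logits.detach() / tau, dim=-1)
    soft_s = F.log_softmax(s_logits / tau, dim=-1)
    distill = F.kl_div(soft_s, soft_t, reduction="batchmean",
                       log_target=True) * tau ** 2
    student_loss = alpha * F.cross_entropy(s_logits, labels) + (1 - alpha) * distill
    return teacher_loss, student_loss
```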
Task-oriented dialogue systems have been plagued by the difficulty of obtaining large-scale, high-quality annotated dialogues. Furthermore, most publicly available datasets only include written conversations, which are insufficient to reflect actual human behaviors in practical spoken dialogue systems. In this paper, we propose Task-oriented Dialogue Data Augmentation (TOD-DA), a novel model-agnostic data augmentation paradigm to boost the robustness of task-oriented dialogue modeling. TOD-DA consists of two modules: 1) Dialogue Enrichment, which expands the training data of task-oriented conversations to alleviate data sparsity, and 2) Spoken Conversation Simulator, which imitates spoken-style expressions and speech recognition errors at various granularities to bridge the gap between written and spoken conversations. With such designs, our approach ranked first in both tasks of DSTC10 Track 2, a benchmark for task-oriented dialogue modeling on spoken conversations, demonstrating the superiority and effectiveness of our proposed TOD-DA.
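To make the spoken-conversation-simulation idea concrete, here is a toy word-level corruptor that substitutes confusable words and drops short function words to imitate recognition errors; the confusion table and probabilities are purely illustrative and not part of TOD-DA.

```python
import random

# Hypothetical confusion pairs standing in for an ASR phonetic-confusion table.
CONFUSIONS = {"two": ["to", "too"], "for": ["four"], "their": ["there"]}

def simulate_asr_errors(utterance, sub_p=0.1, drop_p=0.05, rng=random):
    """Toy spoken-style corruption: substitute confusable words and drop
    short words, loosely imitating speech recognition errors."""
    noisy = []
    for word in utterance.lower().split():
        if word in CONFUSIONS and rng.random() < sub_p:
            noisy.append(rng.choice(CONFUSIONS[word]))
        elif len(word) <= 3 and rng.random() < drop_p:
            continue                      # simulate a dropped function word
        else:
            noisy.append(word)
    return " ".join(noisy)
```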
An AI agent should be able to coordinate with humans to solve tasks. We consider the problem of training a reinforcement learning (RL) agent, without using any human data, i.e., in a zero-shot setting, so that it can collaborate with humans. Standard RL agents learn through self-play. Unfortunately, such agents only know how to collaborate with themselves and usually do not perform well with unseen partners such as humans. How to train a robust agent in a zero-shot fashion still needs to be studied. Motivated by maximum entropy RL, we derive a centralized population entropy objective to facilitate the learning of a diverse population of agents, which is later used to train a robust agent to collaborate with unseen partners. The proposed method shows its effectiveness compared to baseline methods, including self-play PPO, standard population-based training (PBT), and trajectory-diversity-based PBT, in the popular Overcooked game environment. We also conduct online experiments with real humans and further demonstrate the efficacy of the method in the real world. A supplementary video showing the experimental results is available at https://youtu.be/xh-fkd0aake.
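The centralized population entropy objective can be pictured as rewarding states where the population's mean policy is high-entropy; the sketch below computes that quantity for one state under the assumption that each agent exposes its action distribution, which is a simplification of the actual training objective.

```python
import torch

def population_entropy_bonus(action_probs):
    """Entropy of the population's mean policy at a state.

    `action_probs` is a (num_agents, num_actions) tensor holding each
    agent's action distribution at the same state; the returned scalar can
    be used as a diversity bonus during population training (illustrative)."""
    mean_policy = action_probs.mean(dim=0)
    return -(mean_policy * torch.log(mean_policy + 1e-8)).sum()
```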
Remembering and forgetting mechanisms are two sides of the same coin in the human learning-memory system. Inspired by human brain memory mechanisms, modern machine learning systems have been striving to endow machines with lifelong learning ability by remembering better, while treating forgetting as an enemy to overcome. Nevertheless, this idea may only see half the picture. Until recently, an increasing number of researchers have argued that the brain is born to forget, i.e., forgetting is a natural and active process for abstract, rich, and flexible representations. This paper presents a learning model with an active forgetting mechanism in artificial neural networks. The active forgetting mechanism (AFM) is introduced into a neural network via a "plug-and-play" forgetting layer (P&PF), consisting of inhibitory neurons with an internal regulation strategy (IRS) to adjust their own extinction rates through lateral inhibition mechanisms and an external regulation strategy (ERS) to regulate the extinction rates of excitatory neurons through inhibition mechanisms. Experimental studies show that the P&PF offers surprising benefits: adaptive structure, strong generalization, long-term learning and memory, and robustness to data and parameter perturbations. This work sheds light on the importance of forgetting in the learning process and provides new perspectives for understanding the underlying mechanisms of neural networks.
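Reading the description loosely, a "plug-and-play" forgetting layer might be sketched as a small pool of inhibitory units that produce per-feature extinction gates for excitatory activations; the module below is only one guess at such a gating form and does not reproduce the paper's IRS/ERS mechanics.

```python
import torch
import torch.nn as nn

class ForgettingLayer(nn.Module):
    """Loose sketch of a plug-in forgetting layer: inhibitory units compute
    per-feature gates that attenuate (extinguish) excitatory activations."""
    def __init__(self, dim, num_inhibitory=16):
        super().__init__()
        self.inhibitory = nn.Linear(dim, num_inhibitory)
        self.to_gate = nn.Linear(num_inhibitory, dim)

    def forward(self, x):
        inhibition = torch.sigmoid(self.to_gate(torch.relu(self.inhibitory(x))))
        return x * (1.0 - inhibition)   # stronger inhibition -> more forgetting
```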
Code generation models have achieved impressive performance. However, they tend to be brittle as slight edits to a prompt could lead to very different generations; these robustness properties, critical for user experience when deployed in real-life applications, are not well understood. Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code on docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice, preserve the original semantic meaning, and thus provide multifaceted assessments of a model's robustness performance. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models considering the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code can serve as objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, as well as function completion tasks derived from them. Interesting observations include: better robustness for CodeGen over InCoder and GPT-J; models are most sensitive to syntax perturbations; more challenging robustness evaluation on MBPP over HumanEval.
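The worst-case robustness idea can be summarized in a few lines: a task counts as robustly solved only if the generated code passes its tests under every perturbed variant of the prompt. `run_tests`, `model.generate`, and `perturbations` below are placeholder names for illustration, not ReCode's API.

```python
def worst_case_pass(run_tests, model, prompt, perturbations):
    """Worst-case robustness for one task: the completion must pass the
    task's unit tests for the original prompt and all perturbed variants."""
    variants = [prompt] + [perturb(prompt) for perturb in perturbations]
    return all(run_tests(model.generate(p)) for p in variants)
```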